Learning Stochastic Graph Neural Networks With Constrained Variance
Authors
Abstract
Stochastic graph neural networks (SGNNs) are information processing architectures that learn representations from data over random graphs. SGNNs are trained with respect to the expected performance, which comes with no guarantee about deviations of particular output realizations around the optimal expectation. To overcome this issue, we propose a variance-constrained optimization problem for SGNNs, balancing the expected performance and the stochastic deviation. An alternating primal-dual learning procedure is undertaken that solves the problem by updating the SGNN parameters with gradient descent and the dual variable with gradient ascent. To characterize the explicit effect of the variance-constrained learning, we conduct a theoretical analysis on the SGNN variance and identify a trade-off between the stochastic robustness and the discrimination power. We further analyze the duality gap of the variance-constrained optimization problem and the converging behavior of the primal-dual learning procedure. The former indicates the optimality loss induced by the dual transformation and the latter characterizes the limiting error of the iterative algorithm, both of which guarantee the performance of the variance-constrained learning. Through numerical simulations, we corroborate our theoretical findings and observe a strong expected performance with a controllable standard deviation.
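To make the alternating procedure concrete, here is a minimal sketch of the primal-dual updates on a toy one-hop stochastic graph filter with random edge failures. The model, the variance budget b, the dual variable mu, the edge-drop probability, the sampling scheme, and all step sizes are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: variance-constrained primal-dual training of a
# one-hop stochastic graph filter. All names and values are illustrative.
import torch

n, f_in, f_out = 20, 8, 4                 # nodes, input/output features
S = (torch.rand(n, n) < 0.3).float()
S = ((S + S.T) > 0).float()               # nominal graph shift operator
x = torch.randn(n, f_in)                  # node features
t = torch.randn(n, f_out)                 # regression targets
W = torch.randn(f_in, f_out, requires_grad=True)  # filter weights

mu = 0.0                                  # dual variable for the constraint
b = 0.5                                   # variance budget (assumed)
eta_p, eta_d = 1e-2, 1e-2                 # primal/dual step sizes
drop_prob = 0.2                           # independent edge failures

for it in range(500):
    # Sample graph realizations to estimate expected loss and variance.
    outs = []
    for _ in range(10):
        mask = (torch.rand(n, n) > drop_prob).float()
        outs.append((S * mask) @ x @ W)
    outs = torch.stack(outs)                      # (samples, n, f_out)
    exp_loss = ((outs - t) ** 2).mean()           # expected performance
    var = ((outs - outs.mean(0)) ** 2).mean()     # output variance estimate

    # Primal step: gradient descent on the Lagrangian w.r.t. W.
    lagrangian = exp_loss + mu * (var - b)
    lagrangian.backward()
    with torch.no_grad():
        W -= eta_p * W.grad
        W.grad.zero_()

    # Dual step: projected gradient ascent on mu (mu >= 0).
    mu = max(0.0, mu + eta_d * (var.item() - b))
```

The projection max(0, ·) keeps the dual variable feasible; a larger budget b relaxes the constraint toward purely expectation-based training, while a smaller b trades expected performance for a tighter deviation.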
Similar Resources
Stochastic Training of Graph Convolutional Networks with Variance Reduction
Graph convolutional networks (GCNs) are powerful deep neural networks for graph-structured data. However, GCN computes the representation of a node recursively from its neighbors, making the receptive field size grow exponentially with the number of layers. Previous attempts at reducing the receptive field size by subsampling neighbors do not have a convergence guarantee, and their receptive field...
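As a rough illustration of the trade-off this snippet describes, the sketch below contrasts exact neighbor aggregation with a fixed-size subsampled estimate; the adjacency lists, sample size, and feature dimensions are hypothetical, not taken from the cited paper.

```python
# Toy neighbor-subsampling illustration for one GCN aggregation step.
import numpy as np

rng = np.random.default_rng(0)
n, f = 100, 16
h = rng.normal(size=(n, f))                # previous-layer embeddings
neighbors = [rng.choice(n, size=rng.integers(5, 30), replace=False)
             for _ in range(n)]            # hypothetical adjacency lists

def aggregate_full(v):
    # Exact mean over all neighbors: cost grows with degree, and stacking
    # L such layers touches on the order of d^L nodes per output.
    return h[neighbors[v]].mean(axis=0)

def aggregate_sampled(v, sample_size=5):
    # Unbiased Monte Carlo estimate from a small neighbor sample; cheaper,
    # but it introduces the variance the cited paper seeks to reduce.
    idx = rng.choice(neighbors[v], size=sample_size, replace=True)
    return h[idx].mean(axis=0)

v = 0
print(np.linalg.norm(aggregate_full(v) - aggregate_sampled(v)))
```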
Few-Shot Learning with Graph Neural Networks
We propose to study the problem of few-shot learning through the prism of inference on a partially observed graphical model, constructed from a collection of input images whose label can be either observed or not. By assimilating generic message-passing inference algorithms with their neural-network counterparts, we define a graph neural network architecture that generalizes several of the recently...
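For readers unfamiliar with the message-passing view mentioned here, a generic layer of this kind can be sketched as follows; the dimensions, weights, and tanh update rule are illustrative assumptions rather than the paper's exact architecture.

```python
# Minimal message-passing layer in the spirit of the snippet above.
import numpy as np

rng = np.random.default_rng(1)
n, f = 6, 4                                # nodes, feature size
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1)
A = A + A.T                                # symmetric adjacency (could be
                                           # learned from pairwise similarities)
H = rng.normal(size=(n, f))                # node states (e.g. image features)
W_msg = rng.normal(size=(f, f))            # message transform
W_upd = rng.normal(size=(2 * f, f))        # state-update transform

M = A @ (H @ W_msg)                        # sum of transformed neighbor messages
H_new = np.tanh(np.concatenate([H, M], axis=1) @ W_upd)  # updated states
```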
Learning Stochastic Feedforward Neural Networks
Multilayer perceptrons (MLPs) or neural networks are popular models used for nonlinear regression and classification tasks. As regressors, MLPs model the conditional distribution of the predictor variables Y given the input variables X . However, this predictive distribution is assumed to be unimodal (e.g. Gaussian). For tasks involving structured prediction, the conditional distribution should...
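The multimodality argument can be made concrete with a toy stochastic layer: sampling binary hidden units makes repeated forward passes draw from a possibly multimodal predictive distribution. All shapes and names below are hypothetical.

```python
# Toy stochastic feedforward layer with sampled binary hidden units.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=5)                     # one input vector
W1 = rng.normal(size=(8, 5))               # input-to-hidden weights
W2 = rng.normal(size=(1, 8))               # hidden-to-output weights

def sample_prediction(x):
    p = 1.0 / (1.0 + np.exp(-(W1 @ x)))    # Bernoulli activation probabilities
    h = (rng.random(8) < p).astype(float)  # sample stochastic hidden units
    return W2 @ h                          # prediction for this realization

samples = np.array([sample_prediction(x) for _ in range(1000)])
# The empirical distribution of `samples` can exhibit multiple modes,
# unlike a single unimodal (e.g. Gaussian) predictive distribution.
```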
Deep Neural Networks for Learning Graph Representations
In this paper, we propose a novel model for learning graph representations, which generates a low-dimensional vector representation for each vertex by capturing the graph structural information. Different from other previous research efforts, we adopt a random surfing model to capture graph structural information directly, instead of using the sampling-based method for generating linear sequences...
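A minimal sketch of the random-surfing idea described here, assuming a restart probability and step count that are purely illustrative:

```python
# Sketch of random surfing: accumulate k-step transition probabilities
# (with restart) to capture structure directly, without sampling linear
# node sequences. alpha and k are illustrative hyperparameters.
import numpy as np

rng = np.random.default_rng(2)
n = 8
A = rng.integers(0, 2, size=(n, n)).astype(float)
np.fill_diagonal(A, 0)
P = A / np.maximum(A.sum(axis=1, keepdims=True), 1)  # row-normalized transitions

alpha, k = 0.9, 4       # continue probability, number of surfing steps
p = np.eye(n)           # row i: current distribution of a surfer started at i
R = np.zeros((n, n))    # accumulated co-occurrence statistics
for _ in range(k):
    p = alpha * p @ P + (1 - alpha) * np.eye(n)  # step or restart
    R += p
# R (e.g. after a PPMI transform) could feed an autoencoder for embeddings.
```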
Stochastic Neural Networks for Hierarchical Reinforcement Learning
Deep reinforcement learning has achieved many impressive results in recent years. However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach...
Journal
Journal title: IEEE Transactions on Signal Processing
Year: 2023
ISSN: 1053-587X, 1941-0476
DOI: https://doi.org/10.1109/tsp.2023.3244101